.<<kaplan[w84,jmc]		Letter to Kaplan>>
.require "let.pub[let,jmc]" source;
∂CSL Professor David Kaplan↓Department of Philosophy↓UCLA↓Westwood, CA∞

Dear David:

	Thanks for your friendly reaction to my questions and
comments Wednesday.  While my second question was, it seemed
to me, directly apropos of your talk, I was aware that my first
was somewhat peripheral to it.  I made the remark in the hope,
which I consider sufficiently realized, that a friendly reaction
would make the point of view more respectable among the philosophers
present, some of whom tend to be a bit narrow-minded as to what
constitutes an attitude worthy of serious thought.

	For my own benefit I want to see if I can formulate these
reactions, which are as much to the general situation as to your
talk in particular, in a way that is related to the subject under
discussion.  There are two points, but they are related.

	1. The easy case.  Laymen often criticize philosophers for
splitting hairs, since the examples considered often seem far-fetched.
Philosophers often recognize that the examples are far-fetched,
but face the fact that the far-fetched examples seem to be necessary
to test our understanding of the concepts.  The AI researcher would
like to side with the layman, because we are far from being able
to make programs that use concepts as well as even the most naive
layman.  However, we need formalized versions of the concepts, and
considering which facts about the concepts must somehow be
represented in the program or data structures of the computer
often leads to exactly the questions that occupy the philosophers.
Again, however, the philosophers aren't promising conclusive
answers to the questions on any particular time scale.
Therefore, the AI researcher asks:  Must we solve all the philosophical
problems of belief and knowledge before we can put facts about
who knows what and who believes what and how beliefs are changed
by experience into a database of common sense knowledge to be
used by a robot?  The answer had better be no, and in the same
sense in which a person doesn't have to understand all these
problems in order to ascribe beliefs and use the ascription to
determine the effects of his future actions.

	By itself all this is wishful thinking, but one can make
some concrete proposals.  First we need to distinguish the easy
cases of the concepts.  I will consider only belief, partly
because it was the main subject of your talk, but also because
it is the concept I have thought the most about from the AI
point of view.

	In my opinion we need something like the indirect ascription
of belief.  We need to be able to say the dog believes the bone
is buried under the tree without worrying about what concept the
dog has of bones or whether the dog has an internal language.
However, it seems that this ascription involves some presumption
that the dog divides the world up in a way that makes a discrete
object of the bone more or less in the same way we do.  Our observations
of the dog running around the yard and picking up the bone when he
finds it confirm at least this much.  We would have much less
confidence in supposing that an ant regards a large bone as an
object rather than merely noting (if that much) that it is now
walking on an area of different texture and odor from the one it
had been walking on previously.

	I suppose this is the %2de re%1 reference to the bone.  We want
to say that the dog has a belief about that particular material
object as well as a desire to possess it.  We could notice that a
dog or person wanted an object without even having a name for it
ourselves.  For AI purposes, a system that could handle only such
references would be quite useful.  It seems to me, however, that
no one has really formulated a language for ascribing beliefs that
covers these easy cases (exactly what cases?).  Of course, we can
hope that once we had a formulation of these cases, its coverage
would prove somewhat wider, or that it could readily be modified
to widen it.
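
	To be concrete about the easy %2de re%1 case, the sort of
formula I have in mind, with the predicate and function names
meant only as placeholders, is something like

	∃X.(denot(X) = bone1 ∧ believes(dog, Buried(X, Tree1))),

where capitalized terms denote concepts, denot gives the object
a concept is a concept of, and the quantifier over concepts stands
outside the belief context.  This ascribes to the dog a belief
about the particular bone without committing us to any claim about
how the dog itself represents bones.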

	Your example of Lisa helped sharpen my opinion about what
cases should be covered, namely those in which the ascriber
and the person to whom the belief is ascribed have sufficiently
close notions of what the noun phrases denote.  "Lisa believes
he is in San Diego" uses more than one intension for "he", since
Lisa has two notions that, unbeknownst to her, refer to the
same person.  An AI system that couldn't make this distinction
would fail in some way when the distinction was necessary.  However,
that wouldn't be anything new; AI systems are always failing when
they get outside their design limits.
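
	In a notation like the one above, the Lisa example might
come out as two concepts with one denotation, the names again
being only placeholders:

	denot(He1) = denot(He2) = man1,
	believes(lisa, In(He1, San-Diego)) ∧ ¬believes(lisa, In(He2, San-Diego)).

The English sentence "Lisa believes he is in San Diego" is then
true when "he" is resolved to the concept He1 and false when it
is resolved to He2, which is one way of representing that the two
notions, unbeknownst to Lisa, denote the same man.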

	2. A defeasible presumption of simplicity, and the related
notion that a sentence may not have a definite meaning in some
circumstances.  These ideas occurred to me while listening to your
responses to questions from the audience.  The second is simpler.
The sentence "Lisa believes he is in San Diego" may be considered
not to have a definite meaning in the circumstances of your story.
This seems to me less a matter of fact than a question of the most
useful usage.  If I were designing a language for computers to use,
I think it might be convenient to leave that sentence meaningless.
Dennett calls this attitude "taking the design stance", and I agree
with him that it is often useful in studying phenomena that arise
by evolution as well as by purposeful design.

	As you may remember, I have been studying non-monotonic reasoning
and have a method called circumscription that allows one to infer
(conjecturally) that a predicate has a smallest extension compatible
with a certain theory.  Among the presumptions I would like to
be able to make are presumptions of non-ambiguity.  I suspect
that people make such presumptions all the time and that such
presumptions play a role in what conclusions are subsequently
drawn from a collection of sentences.  For example, the presumption
that the Lisa sentence is unambiguous leads to the conclusion that
Lisa doesn't know what city she is in.
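
	For reference, the schema in the enclosed circumscription
paper: the circumscription of a predicate  P  in a theory  A(P)
is the schema

	A(Φ) ∧ ∀x.(Φ(x) ⊃ P(x)) ⊃ ∀x.(P(x) ⊃ Φ(x)),

where  Φ  is a predicate parameter and  A(Φ)  is the result of
replacing  P  by  Φ  in the theory.  Using the schema amounts to
conjecturing that the only objects satisfying  P  are those the
theory forces to satisfy it.  Circumscribing a predicate like
ambiguous(sentence), to take an illustrative case, would make
non-ambiguity the default, defeasible when the rest of the theory
requires an ambiguity.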

	I enclose two papers.  The first is the technical paper
on which my %2Psychology Today%1 article is based, and the second
is about circumscription.

.reg